Person re-identification method based on multi-modal graph convolutional neural network
Jiaming HE, Jucheng YANG, Chao WU, Xiaoning YAN, Nenghua XU
Journal of Computer Applications    2023, 43 (7): 2182-2189.   DOI: 10.11772/j.issn.1001-9081.2022060827

To address the problems that person textual attribute information is not fully utilized and the semantic relationships among textual attributes are not mined in person re-identification, a person re-identification method based on a multi-modal Graph Convolutional Network (GCN) was proposed. Firstly, a Deep Convolutional Neural Network (DCNN) was used to learn person textual attribute features and person image features. Then, exploiting the effective relationship-mining ability of GCN, the textual attribute features and image features were treated as the input of the GCN, and the semantic information of the textual attribute nodes was propagated through graph convolution operations, so that the implicit semantic relationships among the textual attributes were learned and this semantic information was incorporated into the image features. Finally, robust person features were output by the GCN. The proposed multi-modal method achieves a mean Average Precision (mAP) of 87.6% and a Rank-1 accuracy of 95.1% on the Market-1501 dataset, and an mAP of 77.3% and a Rank-1 accuracy of 88.4% on the DukeMTMC-reID dataset, which verifies its effectiveness.
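The graph-convolution step described above — textual attribute features and image features as node inputs, with attribute semantics propagated into the image feature — can be sketched as follows. This is a minimal NumPy illustration: the graph layout, node count, feature dimensions, and names are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def normalize_adjacency(a):
    """Symmetrically normalize A + I: D^(-1/2) (A + I) D^(-1/2)."""
    a_hat = a + np.eye(a.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(h, a_norm, w):
    """One graph-convolution step: ReLU(A_norm @ H @ W)."""
    return np.maximum(0.0, a_norm @ h @ w)

# Hypothetical graph: 1 image-feature node plus 4 textual-attribute nodes,
# fully connected so attribute semantics can flow into the image node.
rng = np.random.default_rng(0)
num_nodes, feat_dim, out_dim = 5, 16, 8
adjacency = np.ones((num_nodes, num_nodes)) - np.eye(num_nodes)
features = rng.standard_normal((num_nodes, feat_dim))  # stand-in for DCNN outputs
weights = rng.standard_normal((feat_dim, out_dim))

fused = gcn_layer(features, normalize_adjacency(adjacency), weights)
person_feature = fused[0]  # image node, now enriched with attribute semantics
```

In practice the weight matrix would be learned end to end and the features would come from the DCNN branches; the sketch only shows how one propagation step mixes attribute-node semantics into the image node.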

Animation video generation model based on Chinese impressionistic style transfer
Wentao MAO, Guifang WU, Chao WU, Zhi DOU
Journal of Computer Applications    2022, 42 (7): 2162-2169.   DOI: 10.11772/j.issn.1001-9081.2021050836

At present, Generative Adversarial Networks (GAN) have been used for image animation style transformation. However, most existing GAN-based animation generation models focus on extracting and generating realistic styles targeting Japanese and American animations, while the transfer of the impressionistic style of Chinese animations has received very little attention, which limits the application of GAN in the domestic animation production market. To solve this problem, a new Chinese-style animation GAN model, namely Chinese Cartoon GAN (CCGAN), was proposed for the automatic generation of animation videos with Chinese impressionistic style by integrating this style into the GAN model. Firstly, by adding inverted residual blocks into the generator, a lightweight deep neural network model was constructed to reduce the computational cost of video generation. Secondly, in order to extract and transfer the characteristics of the Chinese impressionistic style, such as sharp image edges, abstract content structure and stroke lines with ink texture, a gray-scale style loss and a color reconstruction loss were constructed in the generator to constrain the high-level semantic consistency in style between the real images and the Chinese-style sample images. Moreover, in the discriminator, a gray-scale adversarial loss and an edge-promoting adversarial loss were constructed to constrain the reconstructed images to maintain the same edge characteristics as the sample images. Finally, the Adam algorithm was used to minimize the above loss functions to realize style transfer, and the reconstructed images were combined into a video.
Experimental results show that, compared with current representative style transfer models such as CycleGAN and CartoonGAN, the proposed CCGAN can effectively learn the Chinese impressionistic style from Chinese-style animations such as Chinese Choir while significantly reducing the computational cost, indicating that CCGAN is suitable for the rapid generation of animation videos in large quantities.
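The generator-side losses described above can be sketched as follows. This is a simplified pixel-space NumPy illustration: the paper presumably evaluates its style constraints in a learned feature space during training, and all function names, array shapes, and sample data here are illustrative assumptions.

```python
import numpy as np

def to_grayscale(img):
    """Luma conversion; discards color so style is compared on structure and strokes."""
    return img @ np.array([0.299, 0.587, 0.114])

def grayscale_style_loss(generated, cartoon_sample):
    """L1 distance between gray-scale images (pixel-space stand-in for the
    gray-scale style constraint against Chinese-style sample images)."""
    return np.abs(to_grayscale(generated) - to_grayscale(cartoon_sample)).mean()

def color_reconstruction_loss(generated, real_photo):
    """L1 distance in color space: keep the real photo's colors in the output."""
    return np.abs(generated - real_photo).mean()

rng = np.random.default_rng(0)
real = rng.random((64, 64, 3))      # stand-in for a real input frame
cartoon = rng.random((64, 64, 3))   # stand-in for a Chinese-style sample image
generated = 0.5 * real + 0.5 * cartoon  # placeholder for the generator's output

total = grayscale_style_loss(generated, cartoon) + color_reconstruction_loss(generated, real)
```

The two terms pull in different directions by design: the gray-scale term matches stroke and ink texture to the style samples, while the color term anchors the output to the source frame's palette.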

Self-elasticity cloud platform based on OpenStack and Cloudify
PEI Chao, WU Yingchuan, LIU Zhiqin, WANG Yaobin, YANG Lei
Journal of Computer Applications    2014, 34 (6): 1582-1586.   DOI: 10.11772/j.issn.1001-9081.2014.06.1582

When confronted with highly concurrent requests, existing Web services suffer from increased response time and even server crashes. To solve this problem, a distributed self-elasticity architecture for Web systems named ECAP (self-Elasticity Cloud Application Platform) was proposed based on cloud computing. The architecture was built on the Infrastructure as a Service (IaaS) platform OpenStack, combined with the Platform as a Service (PaaS) platform Cloudify, to realize ECAP. In addition, a fuzzy analytic hierarchy scheduling method was realized by building a fuzzy judgment matrix over the scale values of the virtual machine resource templates. Finally, test applications were deployed on the cloud platform and analyzed using a stress-testing tool. The experimental results show that ECAP outperforms a common application server in average response time and load performance.
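The fuzzy analytic hierarchy scheduling step can be sketched as follows, assuming a standard fuzzy complementary judgment matrix and the common row-sum weight formula; the template names and scale values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fuzzy_ahp_weights(judgment):
    """Derive priority weights from a fuzzy complementary judgment matrix
    using the common row-sum formula: w_i = (sum_j r_ij + n/2 - 1) / (n(n-1))."""
    n = judgment.shape[0]
    row_sums = judgment.sum(axis=1)
    return (row_sums + n / 2.0 - 1.0) / (n * (n - 1.0))

# Hypothetical scale values comparing three VM resource templates
# (small, medium, large) by fitness for the current load; entries satisfy
# r_ij + r_ji = 1 on the usual 0.1-0.9 fuzzy scale, r_ii = 0.5.
judgment = np.array([
    [0.5, 0.3, 0.2],
    [0.7, 0.5, 0.4],
    [0.8, 0.6, 0.5],
])

weights = fuzzy_ahp_weights(judgment)
best_template = int(np.argmax(weights))  # index of the template to scale out with
```

The weights sum to 1, so the scheduler can read them directly as relative priorities when choosing which resource template to instantiate.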
